
    A Comparison and Joint Analysis of Sunyaev-Zel'dovich Effect Measurements from Planck and Bolocam for a set of 47 Massive Galaxy Clusters

    We measure the Sunyaev-Zel'dovich (SZ) signal toward a set of 47 clusters with a median mass of $9.5 \times 10^{14}$ M$_{\odot}$ and a median redshift of 0.40 using data from Planck and the ground-based Bolocam receiver. When Planck XMM-like masses are used to set the scale radius $\theta_{\textrm{s}}$, we find consistency between the integrated SZ signal, $Y_{\textrm{5R500}}$, derived from Bolocam and Planck based on gNFW model fits using A10 shape parameters, with an average ratio of $1.069 \pm 0.030$ (allowing for the $\simeq 5$% Bolocam flux calibration uncertainty). We also perform a joint fit to the Bolocam and Planck data using a modified A10 model with the outer logarithmic slope $\beta$ allowed to vary, finding $\beta = 6.13 \pm 0.16 \pm 0.76$ (measurement error followed by intrinsic scatter). In addition, we find that the value of $\beta$ scales with mass and redshift according to $\beta \propto M^{0.077 \pm 0.026} \times (1+z)^{-0.06 \pm 0.09}$. This mass scaling is in good agreement with recent simulations. We do not observe the strong trend of $\beta$ with redshift seen in simulations, though we conclude that this is most likely due to our sample selection. Finally, we use Bolocam measurements of $Y_{500}$ to test the accuracy of the Planck completeness estimate. We find consistency, with the actual number of Planck detections falling approximately $1\sigma$ below the expectation from Bolocam. We translate this small difference into a constraint on the effective mass bias for the Planck cluster cosmology results, with $(1-b) = 0.93 \pm 0.06$.
    Comment: Updated to include one additional co-author. Also some minor changes to the text based on initial feedback.
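
    The modified A10 profile and the quoted $\beta(M, z)$ scaling can be illustrated with a short sketch. This is not the paper's analysis code: the gNFW functional form and shape parameters below are the commonly quoted Arnaud et al. (2010) values, and anchoring the scaling to $\beta = 6.13$ at the sample's median mass and redshift is an assumption made here purely for illustration.

    # Illustrative sketch (not from the paper): gNFW pressure profile with a free
    # outer slope beta, plus the beta(M, z) scaling quoted in the abstract.
    import numpy as np

    # Commonly quoted A10 shape parameters; the outer slope beta is left free here.
    P0, C500, GAMMA, ALPHA = 8.403, 1.177, 0.3081, 1.0510

    def gnfw_pressure(x, beta, p0=P0, c500=C500, gamma=GAMMA, alpha=ALPHA):
        """Dimensionless gNFW pressure profile, with x = r / R500."""
        cx = c500 * x
        return p0 / (cx**gamma * (1.0 + cx**alpha) ** ((beta - gamma) / alpha))

    def beta_scaling(mass_msun, z, beta_pivot=6.13, m_pivot=9.5e14, z_pivot=0.40):
        """beta ~ M^0.077 * (1+z)^-0.06, anchored (by assumption) to the joint-fit
        beta = 6.13 at the sample's median mass and redshift."""
        return (beta_pivot
                * (mass_msun / m_pivot) ** 0.077
                * ((1.0 + z) / (1.0 + z_pivot)) ** -0.06)

    # Example: a more massive, lower-redshift cluster gets a slightly steeper slope.
    x = np.logspace(-2, 1, 5)          # radii in units of R500
    beta = beta_scaling(2e15, 0.2)     # scaled outer slope for this cluster
    print(round(beta, 2))
    print(gnfw_pressure(x, beta))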

    Using Sports Videos to Showcase Exciting Content to Viewers

    In this thesis, we explore the task of generating highlight videos from sports games by assessing how exciting video segments are in order to extract interesting moments from a game, and by using NLP techniques to generate captions for those videos. We build pipelines that extract highlight clips with an audio heuristic, obtain transcriptions of those clips, and, using a defined schema for exciting captions, fine-tune pre-trained transformer models to select the best sentence from each clip to use as its caption. Our results show improvements over baselines that rely solely on emotion-prediction categories of input sentences, suggesting that our models learn additional features for determining the excitement of captions.
    M.Eng.
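
    The audio-based extraction step can be sketched roughly as follows. The abstract says only that an audio heuristic is used; the RMS-loudness windowing, the percentile threshold, and the function name highlight_windows below are illustrative assumptions, not the thesis pipeline.

    # A minimal sketch of one plausible audio heuristic for highlight extraction:
    # flag fixed-length windows whose RMS loudness sits in the top few percent of
    # the game audio. Window length and threshold are arbitrary illustrative choices.
    import numpy as np

    def highlight_windows(audio, sample_rate, window_s=5.0, percentile=95.0):
        """Return (start_s, end_s) spans whose RMS energy exceeds the given percentile."""
        win = int(window_s * sample_rate)
        n = len(audio) // win
        frames = audio[: n * win].reshape(n, win)
        rms = np.sqrt((frames.astype(float) ** 2).mean(axis=1))
        threshold = np.percentile(rms, percentile)
        return [(i * window_s, (i + 1) * window_s) for i in range(n) if rms[i] >= threshold]

    # Toy usage with synthetic audio: a loud burst between 25 s and 30 s gets flagged.
    rate = 16000
    audio = np.random.default_rng(0).normal(0.0, 0.05, rate * 60)
    audio[rate * 25 : rate * 30] += np.random.default_rng(1).normal(0.0, 0.5, rate * 5)
    print(highlight_windows(audio, rate))   # expect the 25-30 s window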